Mixed-Initiative Assistant for Modeling Expert’s Reasoning
Authors
Abstract
This paper presents a mixed-initiative assistant that helps a subject matter expert express the way she solves problems in the task reduction paradigm. It guides the expert to follow a predefined modeling methodology, supports her in expressing her reasoning in natural language with references to the objects from the agent's ontology, and helps her specify solutions to new problems by analogy with previously solved problems. The assistant, which is integrated into the Disciple system for agent development, has been successfully evaluated by subject matter experts at the US Army War College.

1 Instructable agents

For many years we have researched a general theory, a methodology, and a family of tools, called Disciple, for the rapid development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers (Tecuci, 1988; 1998; Boicu, 2002). The short-term goal of this research is to overcome the knowledge acquisition bottleneck in the development of expert and decision-support systems (Buchanan and Wilkins, 1993). The long-term goal is to develop the technology that will allow non-computer scientists to build their own cognitive assistants, which incorporate their subject matter expertise and can support them in their regular problem solving and decision-making activities. The main idea of our approach is to develop an instructable (learning) agent that can be taught directly by a subject matter expert to become a knowledge-based assistant. The expert teaches the agent how to perform problem solving tasks in a way that is similar to how the expert would teach a person: by giving the agent examples of how to solve specific problems, helping it to understand the solutions, and supervising and correcting the agent's problem solving behavior.
The agent learns from the expert by generalizing the examples and building its knowledge base. In essence, the goal is to create a synergism between the expert, who has the knowledge to be formalized, and the agent, which knows how to formalize it.

Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

This is achieved through:
• mixed-initiative problem solving, where the expert solves the more creative parts of the problem and the agent solves the more routine ones;
• integrated learning and teaching, where the expert helps the agent to learn (for instance, by providing examples, hints, and explanations), and the agent helps the expert to teach it (for instance, by asking relevant questions);
• multistrategy learning (Michalski and Tecuci, 1994), where the agent integrates complementary strategies, such as learning from examples, learning from explanations, and learning by analogy, to learn general concepts and rules.

To teach an agent how to solve problems, the expert first has to be able to make explicit the way he or she reasons, in a manner that is formal and precise enough for the agent to learn from. The process of informally expressing the expert's problem solving process for a given problem, using a problem-solving paradigm and a given methodology, is called modeling the expert's reasoning process. Our experience shows that modeling is the single most difficult agent training activity for the expert. This is not surprising, because it primarily involves human creativity and often requires extending the agent's domain language. Moreover, the description of the problem solving process has to be formal and precise enough both to facilitate the agent's learning and to ensure that the learned knowledge is applicable to other situations. A Disciple agent uses task reduction as its main problem solving paradigm.
In this paradigm, a problem solving task is successively reduced to simpler tasks, the solutions of the simplest tasks are found, and these solutions are successively composed into the solution of the initial task. The knowledge base of the agent is structured into an object ontology that represents the objects from an application domain, and a set of task reduction rules and solution composition rules expressed with these objects. To develop a Disciple agent for a specific application domain, one needs to define the ontology for that domain and then teach the agent how to perform various tasks, in a way that resembles how one would teach a human apprentice. This requires the expert to consider specific problems and to show the agent how to solve them by following the task reduction paradigm. As mentioned above, this modeling process is very complex, and the question is how to develop an assistant that can help the expert perform it. One idea is to define a simple modeling methodology, with associated guidelines, which the expert can easily follow to express her reasoning in the task reduction paradigm (Bowman, 2002), and, at the same time, to develop mixed-initiative methods that help the expert follow the methodology. Another idea is to allow the expert to express her reasoning in a language that combines natural language with references to the objects from the agent's ontology. This, in turn, requires the modeling assistant to help the expert identify the objects from the knowledge base she wants to refer to. Yet another idea is to help the expert specify the solutions to new problems by analogy with previously defined solutions. All these ideas form the basis of the mixed-initiative modeling assistant integrated into the Disciple system, described in the rest of this paper.
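The reduce-then-compose cycle described above can be sketched in a few lines of code. This is a minimal illustration only: the task names and the reduction table below are hypothetical, and Disciple's actual representation of tasks, reduction rules, and composition rules is far richer.

```python
# Minimal sketch of the task-reduction paradigm (hypothetical tasks/rules,
# not Disciple's actual knowledge representation).

# Each entry maps a task to the simpler subtasks it reduces to;
# tasks with no entry are treated as elementary and solved directly.
REDUCTIONS = {
    "assess-protection": ["identify-means", "test-vulnerability"],
    "test-vulnerability": ["check-loyalty", "check-capability"],
}

def solve(task):
    """Recursively reduce a task, then compose the subtask solutions."""
    subtasks = REDUCTIONS.get(task)
    if subtasks is None:
        return f"solution({task})"  # elementary task: solve directly
    sub_solutions = [solve(t) for t in subtasks]
    return f"compose[{', '.join(sub_solutions)}]"

print(solve("assess-protection"))
```

The recursion mirrors the paradigm directly: reductions descend until elementary tasks are reached, and compositions bubble the solutions back up to the initial task.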
2 Modeling expert's reasoning process

We have developed a simple and intuitive modeling language in which the expert, with the help of the modeling assistant, expresses the way she solves a specific problem, using natural language, as if she were thinking aloud, as illustrated in Figure 1 (Bowman, 2002). We need to "Assess whether President-Roosevelt has means to be protected." In order to perform this assessment task, the expert and the agent ask themselves a series of questions. The answer to each question leads to the reduction of the current assessment task to simpler assessment tasks. The first question asked is: "What is a means of President Roosevelt to be protected from close physical threats?" The answer, "US Secret Service 1943," leads to the reduction of the above task to the task "Test whether US Secret Service 1943 has any significant vulnerability." In general, the question associated with a task considers some piece of information relevant to solving that task. The answer identifies that piece of information and leads to the reduction of the task to one or several simpler tasks. Alternative questions correspond to alternative approaches to solving the current problem solving task. Several answers to a question correspond to several potential solutions. The modeling language includes many helpful guidelines for the expert, such as: ask small, incremental questions that are likely to have a single category of answer (but not necessarily a single answer). This usually means asking who, what, where, what kind of, or is this or that, etc., rather than complex questions such as who and what, or what and where. The expert expresses her reasoning in natural language, but the modeling assistant provides her with helpful and non-disruptive mechanisms for automatic identification of the knowledge base elements in her phrases. In particular, the modeling assistant has an effective word completion capability.
When the expert types a few characters of a phrase, such as "means to be protected" (see Figure 1), the assistant proposes all the partially matching names from the knowledge base, ordered by their plausibility of being used in the current context, including "means_to_be_protected." The expert selects this name, if only because this is simpler than typing it in full. However, the system now also partially "understands" the English sentence entered by the expert, which significantly facilitates their collaboration.

Figure 1: A sequence of two task reduction steps

3 Modeling Assistant interface

Figure 2 shows the interface of the modeling assistant. The middle part of the screen contains the current task reduction step that the expert is composing. At each state in this process, the right hand side of the screen shows all the actions that could be performed in that state, and the left hand side shows the action that the modeling assistant is actually recommending. For instance, to specify the current subtask, the advisor suggested that the expert copy

Figure 2: Modeling Adviser interface
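The word completion mechanism just described can be sketched as a simple partial-match lookup over ontology names. This is an illustrative approximation only: the ontology contents below are invented for the example, and the paper does not specify how Disciple scores plausibility, so name length is used here as a hypothetical stand-in for that ranking.

```python
# Sketch of ontology-name completion (hypothetical ontology; the
# plausibility ordering is approximated by name length, which is an
# assumption, not Disciple's actual scoring).

ONTOLOGY = [
    "means_to_be_protected",
    "means_of_transport",
    "US_Secret_Service_1943",
    "President_Roosevelt",
]

def complete(fragment, ontology=ONTOLOGY):
    """Return knowledge-base names partially matching the typed fragment,
    ranked by a stand-in plausibility score (shorter names first)."""
    needle = fragment.lower().replace(" ", "_")
    matches = [name for name in ontology if needle in name.lower()]
    return sorted(matches, key=len)

print(complete("means to be"))
```

As in the paper's example, typing the natural-language fragment "means to be protected" surfaces the formal name "means_to_be_protected", so the expert can select it instead of typing it, and the system gains a partial formal reading of her sentence.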
Similar resources
Agent Learning for Mixed-Initiative Knowledge Acquisition
This research has advanced the Disciple learning and problem solving theory for rapid development and maintenance of adaptable cognitive assistants in uncertain and dynamic environments. These assistants can capture, use, preserve, and transfer to other users the subject matter expertise which currently takes years to establish, is lost when experts separate from service, and is costly to repla...
Mixed-Initiative Systems for Collaborative Problem Solving
to building intelligent systems that can collaborate naturally and effectively with people. But true collaborative behavior requires an agent to possess a number of capabilities, including reasoning, communication, planning, execution, and learning. We describe an integrated approach to the design and implementation of a collaborative problem-solving assistant based on a formal theory of joint ...
Modeling the Human Operator's Cognitive Process to Enable Assistant System Decisions
Human operators in human-machine systems can be supported by assistant systems in order to avoid and resolve critical workload peaks. The decisions of such an assistant system should at best be based on the current and anticipated situation (e.g. mission progress) as well as on the current and anticipated cognitive state of the operator, which includes his/her beliefs, goals, plan, intended act...
Collaborative knowledge discovery with Pasteur and Filter: a case of mixed-initiative intelligence
This paper is based on the design of a system for collaborative knowledge discovery, in a situation where both some data and a domain expert are available. This system is composed of two elements: a data-mining algorithm (Pasteur) producing association rules organized in graphs, and a module (Filter) for collection, refinement and use of expert’s comments on the algorithm’s output. The two put ...
TRAINS-95: Towards a Mixed-Initiative Planning Assistant
We have been examining mixed-initiative planning systems in the context of command and control or logistical overview situations. In such environments, the human and the computer must work together in a very tightly coupled way to solve problems that neither alone could manage. In this paper, we describe our implementation of a prototype version of such a system, TRAINS-95, which helps a manage...
Publication date: 2005